Low-precision arithmetic has had a transformative effect on the training of neural networks, reducing computation, memory, and energy requirements. However, despite its promise, low-precision arithmetic has received little attention for Gaussian processes (GPs), largely because GPs require sophisticated linear algebra routines that are unstable in low precision. We study the different failure modes that can occur when training GPs in half precision. To circumvent these failure modes, we propose a multi-faceted approach involving conjugate gradients with re-orthogonalization, mixed precision, and preconditioning. Our approach substantially improves the numerical stability and practical performance of conjugate gradients in low precision over a wide variety of settings, enabling GPs to train on $1.8$ million data points in $10$ hours on a single GPU, without any sparse approximations.
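The core idea, running the matrix-vector products of conjugate gradients in half precision while accumulating scalars and iterates in higher precision, can be sketched in a few lines. This is an illustrative toy, not the paper's implementation: NumPy's `float16` stands in for half-precision hardware, and the re-orthogonalization and preconditioning components are omitted.

```python
import numpy as np

def cg_mixed_precision(K, y, tol=1e-2, max_iters=1000):
    """Conjugate gradients with half-precision matrix-vector products.

    The matvec K @ p runs in float16 (emulating low-precision hardware),
    while inner products, residuals, and the solution are kept in float32.
    """
    K16 = K.astype(np.float16)
    x = np.zeros_like(y, dtype=np.float32)
    r = y.astype(np.float32).copy()          # residual r = y - K x  (x = 0)
    p = r.copy()
    rs_old = float(r @ r)
    for _ in range(max_iters):
        Kp = (K16 @ p.astype(np.float16)).astype(np.float32)  # low-precision matvec
        alpha = rs_old / float(p @ Kp)
        x += alpha * p
        r -= alpha * Kp
        rs_new = float(r @ r)
        if np.sqrt(rs_new) < tol * np.linalg.norm(y):
            break
        p = r + (rs_new / rs_old) * p
        rs_old = rs_new
    return x

# Small, well-conditioned SPD system as a smoke test.
rng = np.random.default_rng(0)
A = rng.uniform(-1, 1, size=(20, 20)).astype(np.float32)
K = A @ A.T / 20 + np.eye(20, dtype=np.float32)
y = rng.standard_normal(20).astype(np.float32)
x = cg_mixed_precision(K, y)
```

Note that with a float16 matvec the achievable residual has a precision floor, which is why a loose tolerance (here $10^{-2}$) is the natural stopping criterion.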
A broad class of stochastic volatility models are defined by systems of stochastic differential equations. While these models have achieved widespread success in domains such as finance and statistical climatology, they typically lack the ability to condition on historical data to produce a true posterior distribution. To address this fundamental limitation, we show how to re-cast a class of stochastic volatility models as hierarchical Gaussian process (GP) models with specialized covariance functions. This GP model retains the inductive biases of the stochastic volatility model while providing the posterior predictive distribution given by GP inference. Within this framework, we take inspiration from well-studied domains to introduce new models, Volt and Magpie, which significantly outperform baselines in stock and wind speed forecasting and naturally extend to the multitask setting.
While recent work on conjugate gradient methods and Lanczos decompositions has achieved scalable Gaussian process inference, in several implementations these iterative methods appear to struggle with numerical instabilities when learning kernel hyperparameters, and with poor test likelihoods. By investigating CG tolerance, preconditioner rank, and Lanczos decomposition rank, we provide a particularly simple prescription to correct these issues: we recommend using a small CG tolerance ($\epsilon \leq 0.01$) and a large root decomposition size ($r \geq 5000$). Moreover, we show that L-BFGS-B is a compelling optimizer for iterative GPs, achieving convergence in fewer gradient updates.
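To illustrate the optimizer recommendation (this is not the paper's code), here is a minimal GP hyperparameter fit with SciPy's L-BFGS-B on synthetic data. For brevity a direct Cholesky solve stands in for the iterative CG solves the paper studies, and gradients are taken by finite differences; the RBF kernel, data, and jitter constant are all assumptions of the sketch.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(40, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(40)

def nll(theta):
    """Negative log marginal likelihood of an RBF-kernel GP.

    theta holds log(lengthscale) and log(noise variance); a small jitter
    keeps the Cholesky factorization stable during optimization.
    """
    ls, noise = np.exp(theta)
    sqdist = (X - X.T) ** 2
    K = np.exp(-0.5 * sqdist / ls**2) + (noise + 1e-6) * np.eye(len(X))
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return 0.5 * y @ alpha + np.log(np.diag(L)).sum() + 0.5 * len(X) * np.log(2 * np.pi)

# L-BFGS-B builds a quasi-Newton model of the objective, typically
# converging in far fewer gradient evaluations than plain gradient descent.
res = minimize(nll, x0=np.zeros(2), method="L-BFGS-B")
```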
With a better understanding of the loss surfaces of multilayer networks, we can build more robust and accurate training procedures. It was recently discovered that independently trained SGD solutions can be connected along one-dimensional paths of near-constant training loss. In this paper, we show that there exist mode-connecting simplicial complexes that form multi-dimensional manifolds of low loss, connecting many independently trained models. Inspired by this discovery, we show how to efficiently build simplicial complexes for fast ensembling, outperforming independently trained deep ensembles in accuracy, calibration, and robustness to dataset shift. Notably, our approach only requires a few training epochs to discover a low-loss simplex, starting from a pre-trained solution. Code is available at https://github.com/g-benton/loss-surface-simplexes.
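The ensembling step, averaging predictions of models sampled uniformly from the simplex spanned by several low-loss parameter vectors, can be sketched as follows. The "vertices" here are hypothetical stand-ins (perturbed weights of a tiny linear model, not trained networks), so the numbers are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical simplex vertices: three independently found low-loss
# parameter vectors for a tiny linear model y = X @ w.
X = rng.standard_normal((50, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true
vertices = [w_true + 0.05 * rng.standard_normal(3) for _ in range(3)]

def simplex_ensemble_predict(X, vertices, n_samples=20):
    """Average predictions of models drawn uniformly from the simplex
    spanned by the vertex parameter vectors (Dirichlet(1,...,1) weights)."""
    preds = []
    for _ in range(n_samples):
        lam = rng.dirichlet(np.ones(len(vertices)))    # random point in the simplex
        w = sum(l * v for l, v in zip(lam, vertices))  # convex combination of weights
        preds.append(X @ w)
    return np.mean(preds, axis=0)

pred = simplex_ensemble_predict(X, vertices)
```

Because the whole simplex has low loss, every sampled convex combination is itself a reasonable model, which is what makes this form of ensembling cheap.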
Bayesian optimization (BayesOpt) is a gold standard for query-efficient continuous optimization. However, its adoption for drug design has been hindered by the discrete, high-dimensional nature of the decision variables. We develop a new approach (LaMBO) that jointly trains a denoising autoencoder with a discriminative multi-task Gaussian process head, enabling gradient-based optimization of multi-objective acquisition functions in the latent space of the autoencoder. These acquisition functions allow LaMBO to balance the explore-exploit tradeoff over multiple design rounds, and to balance objective tradeoffs by optimizing sequences at many different points on the Pareto frontier. We evaluate LaMBO on two small-molecule design tasks, and introduce new tasks optimizing \emph{in silico} and \emph{in vitro} properties. In our experiments, LaMBO outperforms genetic optimizers and does not require a large pretraining corpus, demonstrating that BayesOpt is practical and effective for biological sequence design.
Optimization based on physics simulations is a common task in science and engineering. Many such simulations produce image or tensor outputs, where the desired objective is a function of those outputs, and optimization is performed over a high-dimensional parameter space. We develop Bayesian optimization methods that leverage tensor-based Gaussian process surrogates and trust region Bayesian optimization to effectively model the image outputs and to efficiently optimize these types of simulations, including a radio-frequency tower configuration problem and an optical design problem.
We carefully compare two model-free control algorithms, evolution strategies (ES) and proximal policy optimization (PPO), with receding-horizon model predictive control (MPC) for operating simulated, price-responsive water heaters. Four MPC variants are considered: a one-shot controller with perfect forecasting, which yields optimal control; a limited-horizon controller with perfect forecasting; a mean-forecast controller; and a two-stage stochastic programming controller using historical scenarios. In all cases, the MPC models of water temperature and electricity prices are exact; only the water demand is uncertain. For comparison, ES and PPO learn neural-network-based policies by interacting directly with the simulated environment under the same scenarios used by MPC. All methods are then evaluated on a separate one-week continuation of the demand time series. We demonstrate that optimal control for this problem is challenging, requiring more than an 8-hour lookahead for MPC with perfect forecasting to arrive at the lowest cost. Despite this challenge, both ES and PPO learn good general-purpose policies that outperform the mean-forecast and two-stage stochastic MPC controllers in average cost, and that are orders of magnitude faster at computing actions. We show that ES in particular can leverage parallelism, learning a policy in 90 seconds using 1150 CPU cores.
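The basic ES gradient estimator behind such policies can be sketched on a toy objective. Everything here is an assumption of the sketch: the quadratic `cost` stands in for the water-heater simulation, and a two-parameter vector stands in for the neural-network policy weights.

```python
import numpy as np

rng = np.random.default_rng(0)
target = np.array([1.0, -2.0])

def cost(theta):
    """Toy stand-in for the simulated operating cost of a controller
    parameterized by theta (the real cost comes from a water-heater sim)."""
    return float(np.sum((theta - target) ** 2))

def evolution_strategies(theta, sigma=0.1, lr=0.05, pop=100, iters=200):
    """Basic ES: estimate a search gradient from Gaussian perturbations.

    Each of the `pop` cost evaluations is independent, which is why ES
    parallelizes so well across many CPU cores.
    """
    for t in range(iters):
        eps = rng.standard_normal((pop, theta.size))
        rewards = np.array([-cost(theta + sigma * e) for e in eps])
        rewards = (rewards - rewards.mean()) / (rewards.std() + 1e-8)  # rank-like normalization
        grad = (rewards[:, None] * eps).mean(axis=0) / sigma           # search-gradient estimate
        theta = theta + lr / (1 + 0.05 * t) * grad                     # decaying step size
    return theta

theta_es = evolution_strategies(np.zeros(2))
```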
We propose SWA-Gaussian (SWAG), a simple, scalable, and general purpose approach for uncertainty representation and calibration in deep learning. Stochastic Weight Averaging (SWA), which computes the first moment of stochastic gradient descent (SGD) iterates with a modified learning rate schedule, has recently been shown to improve generalization in deep learning. With SWAG, we fit a Gaussian using the SWA solution as the first moment and a low rank plus diagonal covariance also derived from the SGD iterates, forming an approximate posterior distribution over neural network weights; we then sample from this Gaussian distribution to perform Bayesian model averaging. We empirically find that SWAG approximates the shape of the true posterior, in accordance with results describing the stationary distribution of SGD iterates. Moreover, we demonstrate that SWAG performs well on a wide variety of tasks, including out of sample detection, calibration, and transfer learning, in comparison to many popular alternatives including MC dropout, KFAC Laplace, SGLD, and temperature scaling.
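The moment computation and sampling in SWAG can be sketched directly from the description above. This is a schematic with synthetic "iterates" (random draws rather than snapshots from a real SGD run), and the rank `K` and scaling follow the low-rank-plus-diagonal form described in the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in "SGD iterates": parameter snapshots collected along a run
# (here random draws around a fixed point, not a real training loop).
iterates = 1.0 + 0.1 * rng.standard_normal((30, 5))  # 30 snapshots, 5 params
K = 10                                               # rank of the low-rank term

mean = iterates.mean(axis=0)                         # SWA solution (first moment)
sq_mean = (iterates ** 2).mean(axis=0)
diag_var = np.maximum(sq_mean - mean ** 2, 1e-12)    # diagonal covariance
D = iterates[-K:] - mean                             # deviation matrix, last K snapshots

def sample_swag(n):
    """Draw weights from the Gaussian with SWA mean and the
    half-diagonal, half-low-rank SWAG covariance."""
    z1 = rng.standard_normal((n, mean.size))
    z2 = rng.standard_normal((n, K))
    return (mean
            + np.sqrt(0.5 * diag_var) * z1            # diagonal part
            + (z2 @ D) / np.sqrt(2 * (K - 1)))        # low-rank part

samples = sample_swag(1000)
```

Bayesian model averaging then amounts to running the network forward with each sampled weight vector and averaging the predictive distributions.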
We demonstrate a proof-of-concept of a large language model conducting corporate lobbying related activities. We use an autoregressive large language model (OpenAI's text-davinci-003) to determine if proposed U.S. Congressional bills are relevant to specific public companies and provide explanations and confidence levels. For the bills the model deems as relevant, the model drafts a letter to the sponsor of the bill in an attempt to persuade the congressperson to make changes to the proposed legislation. We use hundreds of ground-truth labels of the relevance of a bill to a company to benchmark the performance of the model, which outperforms the baseline of predicting the most common outcome of irrelevance. However, we test the ability to determine the relevance of a bill with the previous OpenAI GPT-3 model (text-davinci-002), which was state-of-the-art on many language tasks until text-davinci-003 was released on November 28, 2022. The performance of text-davinci-002 is worse than simply always predicting that a bill is irrelevant to a company. These results suggest that, as large language models continue to improve core natural language understanding capabilities, performance on corporate lobbying related tasks will continue to improve. We then discuss why this could be problematic for societal-AI alignment.
In the past years, deep learning has seen an increase of usage in the domain of histopathological applications. However, while these approaches have shown great potential, in high-risk environments deep learning models need to be able to judge their own uncertainty and be able to reject inputs when there is a significant chance of misclassification. In this work, we conduct a rigorous evaluation of the most commonly used uncertainty and robustness methods for the classification of Whole-Slide-Images under domain shift using the H\&E stained Camelyon17 breast cancer dataset. Although it is known that histopathological data can be subject to strong domain shift and label noise, to our knowledge this is the first work that compares the most common methods for uncertainty estimation under these aspects. In our experiments, we compare Stochastic Variational Inference, Monte-Carlo Dropout, Deep Ensembles, Test-Time Data Augmentation as well as combinations thereof. We observe that ensembles of methods generally lead to higher accuracies and better calibration and that Test-Time Data Augmentation can be a promising alternative when choosing an appropriate set of augmentations. Across methods, a rejection of the most uncertain tiles leads to a significant increase in classification accuracy on both in-distribution as well as out-of-distribution data. Furthermore, we conduct experiments comparing these methods under varying conditions of label noise. We observe that the border regions of the Camelyon17 dataset are subject to label noise and evaluate the robustness of the included methods against different noise levels. Lastly, we publish our code framework to facilitate further research on uncertainty estimation on histopathological data.
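The rejection mechanism described above, dropping the most uncertain tiles by predictive entropy and reporting accuracy on the rest, can be sketched with synthetic probabilities. The data here is entirely made up to mimic the qualitative observation that misclassified tiles tend to receive more diffuse predictions; it is not the Camelyon17 evaluation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for (ensemble-averaged) softmax outputs on 1000 "tiles":
# correctly classified tiles get confident probabilities, errors get diffuse ones.
n = 1000
labels = rng.integers(0, 2, n)                 # binary tumor / no-tumor labels
probs = np.empty((n, 2))
correct = rng.random(n) < 0.85                 # ~85% base accuracy
probs[correct, labels[correct]] = 0.95
probs[correct, 1 - labels[correct]] = 0.05
wrong = ~correct
probs[wrong, 1 - labels[wrong]] = 0.65         # wrong, and only mildly confident
probs[wrong, labels[wrong]] = 0.35

def accuracy_with_rejection(probs, labels, reject_frac):
    """Drop the reject_frac most-uncertain tiles (by predictive entropy)
    and report classification accuracy on the retained tiles."""
    entropy = -(probs * np.log(probs)).sum(axis=1)
    keep = entropy.argsort()[: int(len(labels) * (1 - reject_frac))]
    return (probs[keep].argmax(axis=1) == labels[keep]).mean()

base = accuracy_with_rejection(probs, labels, 0.0)
rejected = accuracy_with_rejection(probs, labels, 0.2)
```

Because uncertainty correlates with error in this toy setup, rejecting 20% of tiles lifts accuracy on the retained set well above the base rate, mirroring the effect reported for both in-distribution and out-of-distribution data.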